Fraud’s Bargain Attacks to Textual Classifiers via Metropolis-Hastings Sampling (Student Abstract)
Authors
Abstract
Recent studies on adversarial examples expose vulnerabilities of natural language processing (NLP) models. Existing techniques for generating adversarial examples are typically driven by deterministic heuristic rules that are agnostic to the optimal adversarial examples, a strategy that often results in attack failures. To this end, this research proposes Fraud's Bargain Attack (FBA), which utilizes a novel randomization mechanism to enlarge the search space and enables high-quality adversarial examples to be generated with high probabilities. FBA applies the Metropolis-Hastings algorithm to enhance the selection from all candidates proposed by a customized Word Manipulation Process (WMP). WMP perturbs one word at a time via insertion, removal, or substitution in a context-aware manner. Extensive experiments demonstrate that FBA outperforms the baselines in terms of attack success rate and imperceptibility.
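As a rough illustration of the procedure the abstract describes, the sketch below pairs a toy Word Manipulation Process (one-word insertion, removal, or substitution) with a Metropolis-Hastings acceptance step. The victim classifier, vocabulary, attack score, and symmetric-proposal assumption used here are hypothetical stand-ins chosen for the sketch, not the paper's actual components, which the abstract does not specify.

# Minimal sketch of the FBA loop as described in the abstract: a Word
# Manipulation Process (WMP) proposes one-word edits, and a
# Metropolis-Hastings step decides whether to accept each proposal.
# Classifier, vocabulary, and attack score below are hypothetical.
import math
import random

VOCAB = ["good", "bad", "great", "terrible", "fine", "awful"]  # toy vocabulary

def victim_prob_positive(words):
    """Hypothetical victim classifier: P(label = positive | text)."""
    score = sum(1 for w in words if w in ("good", "great", "fine"))
    score -= sum(1 for w in words if w in ("bad", "terrible", "awful"))
    return 1.0 / (1.0 + math.exp(-score))

def attack_score(words, original_label=1):
    """Unnormalized target: higher when the victim is pushed away from
    the original label (here, label 1 = positive)."""
    p = victim_prob_positive(words)
    return 1.0 - p if original_label == 1 else p

def wmp_proposal(words):
    """Word Manipulation Process: perturb one word via insertion,
    removal, or substitution (context-awareness is omitted here)."""
    words = list(words)
    action = random.choice(["insert", "remove", "substitute"])
    pos = random.randrange(len(words) + (action == "insert"))
    if action == "insert":
        words.insert(pos, random.choice(VOCAB))
    elif action == "remove" and len(words) > 1:
        words.pop(pos)
    else:
        words[pos] = random.choice(VOCAB)
    return words

def fba_attack(sentence, steps=200):
    """Metropolis-Hastings search over candidate adversarial examples."""
    current = sentence.split()
    for _ in range(steps):
        candidate = wmp_proposal(current)
        # Symmetric-proposal MH acceptance ratio (a simplification; the
        # paper's proposal distribution need not be symmetric).
        ratio = attack_score(candidate) / max(attack_score(current), 1e-12)
        if random.random() < min(1.0, ratio):
            current = candidate
    return " ".join(current)

print(fba_attack("the movie was good and fine"))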
Similar Resources
Metropolis Sampling
Monte Carlo (MC) sampling methods are widely applied in Bayesian inference, system simulation and optimization problems. The Markov Chain Monte Carlo (MCMC) algorithms are a well-known class of MC methods which generate a Markov chain with the desired invariant distribution. In this document, we focus on the Metropolis-Hastings (MH) sampler, which can be considered as the atom of the MCMC techn...
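For readers unfamiliar with the sampler mentioned above, here is a minimal Metropolis-Hastings sketch that draws samples from an unnormalized 1D density using a Gaussian random-walk proposal. The target density and step size are illustrative choices, not taken from the cited work.

# Minimal Metropolis-Hastings sampler with a symmetric random-walk proposal.
import math
import random

def target(x):
    """Unnormalized target density: a mixture of two Gaussian bumps."""
    return math.exp(-0.5 * (x - 2.0) ** 2) + 0.5 * math.exp(-0.5 * (x + 2.0) ** 2)

def metropolis_hastings(n_samples=10_000, step=1.0, x0=0.0):
    samples, x = [], x0
    for _ in range(n_samples):
        proposal = x + random.gauss(0.0, step)        # symmetric proposal
        accept_prob = min(1.0, target(proposal) / target(x))
        if random.random() < accept_prob:
            x = proposal                              # accept the move
        samples.append(x)                             # otherwise keep current x
    return samples

draws = metropolis_hastings()
print(sum(draws) / len(draws))   # rough estimate of the target mean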
Metropolis-Hastings sampling of paths
We consider the previously unsolved problem of sampling cycle-free paths according to a given distribution from a general network. The problem is difficult because of the combinatorial number of alternatives, which prohibits a complete enumeration of all paths and hence also precludes computing the normalizing constant of the sampling distribution. The problem is important because the ability to...
Query-limited Black-box Attacks to Classifiers
We study black-box attacks on machine learning classifiers where each query to the model incurs some cost or risk of detection to the adversary. We focus explicitly on minimizing the number of queries as a major objective. Specifically, we consider the problem of attacking machine learning classifiers subject to a budget of feature modification cost while minimizing the number of queries, where...
Learning Probabilistic Linear-Threshold Classifiers via Selective Sampling
In this paper we investigate selective sampling, a learning model where the learner observes a sequence of i.i.d. unlabeled instances each time deciding whether to query the label of the current instance. We assume that labels are binary and stochastically related to instances via a linear probabilistic function whose coefficients are arbitrary and unknown. We then introduce a new selective sam...
Imbalanced Datasets: from Sampling to Classifiers
Classification is one of the most fundamental tasks in the machine learning and data-mining communities. One of the most common challenges faced when trying to perform classification is the class imbalance problem. A dataset is considered imbalanced if the class of interest (positive or minority class) is relatively rare as compared to the other classes (negative or majority classes). As a resu...
Journal
Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2023
ISSN: 2159-5399, 2374-3468
DOI: https://doi.org/10.1609/aaai.v37i13.27005